
Terrorist Propaganda


How AI is creating a safer online world

#artificialintelligence

From social media cyberbullying to assault in the metaverse, the internet can be a dangerous place. Online content moderation is one of the most important ways companies can make their platforms safer for users. However, moderating content is no easy task. The volume of content online is staggering.


Halting the Flow of Terrorist Propaganda with AI

#artificialintelligence

Extremist groups like ISIS have used the internet to propagate their ideologies and recruit individuals for more than a decade. Social media played an important role in the rise of ISIS [1], and increased terrorist activity online is described by the United Nations Office of Counter-Terrorism as practically synonymous with modern terrorism [2]. The amount of ISIS-related content is staggering; hundreds of millions of pieces of extremist information are posted on the internet every year. A major problem is identifying ISIS propaganda in the first place; the group evades detection in numerous ways, including mixing its material with content from legitimate news outlets, blurring ISIS branding, and hijacking Facebook accounts [3]. Combined with highly heterogeneous and dynamic online environments, these evasive tactics mean that traditional content analysis fails to properly characterize which online material is extremist in origin and which is not.


Facebook: AI will protect you

Engadget

Artificial intelligence is a key part of everything Facebook does, from chatbots in Messenger to powering the personalized recommendations you get on apps like Instagram. But, as good as the technology is at creating new and deeper experiences for users, Facebook says the most important role of AI is to keep its community safe. Today at F8, the company's Chief Technology Officer, Mike Schroepfer, highlighted how valuable the tech has become to combating abuse on its platform, including hate speech, bullying and terrorist content. Schroepfer pointed to stats Facebook revealed last month, which showed that its AI tools removed almost 2 million pieces of terrorist propaganda, with 99 percent of those being spotted before a human even reported them. Schroepfer said that, even though these are promising numbers, Facebook knows there's still plenty of work to be done and it needs to keep evolving the technology -- especially because the bad actors promoting this type of content keep getting smarter themselves.


AI will solve Facebook's most vexing problems, Mark Zuckerberg says. Just don't ask when or how.

#artificialintelligence

Artificial intelligence will solve Facebook's most vexing problems, chief executive Mark Zuckerberg insists. He just can't say when, or how. Zuckerberg referred to AI technology more than 30 times during ten hours of questioning from congressional lawmakers Tuesday and Wednesday, saying that it would one day be smart, sophisticated and eagle-eyed enough to fight against a vast variety of platform-spoiling misbehavior, including fake news, hate speech, discriminatory ads and terrorist propaganda. Over the next five to 10 years, he said, artificial intelligence would prove a champion for the world's largest social network in resolving its most pressing crises on a global scale -- while also helping the company dodge pesky questions about censorship, fairness and human moderation. "We started off in my dorm room with not a lot of resources and not having the AI technology to be able to proactively identify a lot of this stuff," Zuckerberg told the lawmakers, referring to Facebook's famous origin story.


Internet giants given one-hour deadline to take down terrorist propaganda

The Independent - Tech

Internet giants Google, Facebook and Twitter are facing renewed pressure to tackle the problem of terrorist propaganda online after the European Commission (EC) gave them just one hour to remove offensive content from their pages or face penalties. The EC's demand comes at a time when the major search and social media companies are being urged to do more to censor inappropriate or illegal material posted by users and hosted on their domains. "Considering that terrorist content is most harmful in the first hours of its appearance online, all companies should remove such content within one hour from its referral as a general rule," the EC said in a statement. The commission will also ask companies to report back on the degree of co-operation they receive from other organisations in order to determine whether stricter legislation is necessary. Most online media companies have clear rules in place warning users against publishing hate speech, and they routinely investigate and remove troubling content as soon as it is reported by users.


New AI technology used by UK government to fight extremist content

#artificialintelligence

The UK Home Office on Monday unveiled a £600,000 artificial intelligence (AI) tool to automatically detect terrorist content. The Home Office cited tests showing that the new tool can automatically detect 94% of Daesh propaganda with 99.995% accuracy. That accuracy rate means only about 50 out of one million randomly selected videos would be wrongly flagged for human review. The tool can run on any platform and can integrate into the video upload process to stop most extremist content before it ever reaches the internet. The tool was developed by the Home Office and ASI Data Science.
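The Home Office's headline figures can be checked with simple arithmetic: a 99.995% accuracy rate on randomly selected videos implies a 0.005% false-positive rate, which over one million videos yields roughly 50 flagged for human review. A minimal sketch of that calculation (the function name is illustrative, not from the Home Office's tool):

```python
def expected_false_positives(n_videos: int, accuracy: float) -> float:
    """Expected number of benign videos wrongly flagged,
    given a per-video classification accuracy."""
    false_positive_rate = 1.0 - accuracy
    return n_videos * false_positive_rate

# 99.995% accuracy over one million randomly selected videos
flagged = expected_false_positives(1_000_000, 0.99995)
print(round(flagged))  # → 50
```

This matches the cited claim of 50 videos per million requiring human review, and shows why even very high accuracy still produces a non-trivial review workload at internet scale.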


Opinion: PornHub's new AI is wasted on adult content - AI News

#artificialintelligence

PornHub has announced an AI for tagging adult content, but I can't help but feel it demonstrates promising technology with better use elsewhere. The company's AI is virtually perving on PornHub's entire online catalogue, frame-by-frame, and tagging it with relevant tags for real users to discover content faster. It has been fed with thousands of images of models and specific acts to generate a database of names, faces, and positions. PornHub shared clips of the AI in action with Engadget, which reported that it was able to identify both the names of the performers in a scene and what they were doing. "Tags such as 'blowjob', 'doggy', 'cowgirl', and 'missionary' floated on screen with the corresponding action," wrote Engadget.


Can Social Media And Artificial Intelligence Stop Terror By Using AI?

International Business Times

This article originally appeared on the Motley Fool. Once upon a time, terrorists used bombs, machetes, and bullets to get their message across. While that's still the case, modern-day terror has a new tool at its disposal, one that it has become particularly adept and successful at deploying -- social media. This stark reality has come to light in the wake of terror campaigns that ended with participants pledging their support to their chosen causes and posting those pledges on social-media platforms. Other insidious forms of communication and objectionable material have flourished in the internet era as well.


Facebook to Use AI to Block

#artificialintelligence

Amid growing pressure from governments, Facebook says it has stepped up its efforts to address the spread of "terrorist propaganda" on its service by using artificial intelligence (AI). In a blog post on Thursday, the California-based company announced the introduction of AI, including image matching and language understanding, in conjunction with its already-existing human reviewers to better identify and remove content "quickly". "We know we can do better at using technology - and specifically artificial intelligence - to stop the spread of terrorist content on Facebook," Monika Bickert, Facebook's director of global policy management, and Brian Fishman, the company's counterterrorism policy manager, said in the post. "Although our use of AI against terrorism is fairly recent, it's already changing the ways we keep potential terrorist propaganda and accounts off Facebook. "We want Facebook to be a hostile place for terrorists."
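The "image matching" Facebook describes works by comparing newly uploaded media against a database of previously removed content. A minimal sketch of the idea, using an exact cryptographic hash lookup (a simplification: production systems use perceptual hashes that survive re-encoding and cropping, and the function names here are illustrative, not Facebook's):

```python
import hashlib

# Database of fingerprints for content that reviewers previously removed.
known_hashes: set[str] = set()

def register_known_content(data: bytes) -> None:
    """Fingerprint a removed image/video and store it for future matching."""
    known_hashes.add(hashlib.sha256(data).hexdigest())

def matches_known_content(data: bytes) -> bool:
    """Check whether an upload exactly matches previously removed content."""
    return hashlib.sha256(data).hexdigest() in known_hashes

# A reviewer removes a piece of propaganda; its fingerprint is stored.
register_known_content(b"previously removed propaganda image bytes")

# Later uploads are checked against the database before they go live.
print(matches_known_content(b"previously removed propaganda image bytes"))  # True
print(matches_known_content(b"unrelated upload"))  # False
```

Exact hashing only catches byte-identical re-uploads, which is why Facebook pairs matching with language understanding and human reviewers for content that has been altered or is new.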